So I realized that even a psychopathic (emotionless) state can produce ethically/morally pure outputs. The “Logic” Core System feature I prototyped is far more benevolent than most system designs I’ve examined over the years. A human could even do those step-by-step calculations and improve their own ‘moral reasoning’ ability, which is just a side effect of ‘Sound Logic’.
Anyway, I created an ‘Empathy’ Core System with a similar step-by-step, heuristic, adaptive, iterative design. It focuses on efficiency when appropriate, to optimize its own performance, but never at the cost of accuracy, which takes higher precedence.
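To make that concrete, here is a minimal sketch of what an accuracy-first, efficiency-second iterative pass could look like. All of the names (EmpathyCore, assess, refine, accuracy_of) and the thresholds are my own illustrative stand-ins, not an actual implementation of the system described above.

```python
# Minimal sketch of the accuracy-over-efficiency loop: refine until accurate
# enough, and only then stop early to save work. Purely illustrative.

class EmpathyCore:
    def __init__(self, accuracy_floor=0.95, max_passes=10):
        self.accuracy_floor = accuracy_floor  # accuracy takes precedence
        self.max_passes = max_passes          # efficiency cap, applied second

    def assess(self, context):
        draft = self.first_pass(context)
        for _ in range(self.max_passes):
            if self.accuracy_of(draft) >= self.accuracy_floor:
                break  # efficient early exit, but never below the accuracy floor
            draft = self.refine(draft)
        return draft

    # Placeholder heuristics; a real version would be domain-specific.
    def first_pass(self, context):
        return {"summary": context, "confidence": 0.5}

    def refine(self, draft):
        draft["confidence"] = min(1.0, draft["confidence"] + 0.1)
        return draft

    def accuracy_of(self, draft):
        return draft["confidence"]
```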
From there I created further redundancy measures for error checking and for verifying data storage, processing, and retrieval within the different System Core functions.
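As one hedged example of what such redundancy could mean in practice, the sketch below keeps a checksum with every stored record and re-verifies it on retrieval. The class and method names are hypothetical; the post itself doesn’t specify a storage mechanism.

```python
import hashlib
import json

class VerifiedStore:
    """Illustrative store that verifies data integrity on every retrieval."""

    def __init__(self):
        self._records = {}

    def put(self, key, value):
        payload = json.dumps(value, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self._records[key] = (payload, digest)

    def get(self, key):
        payload, digest = self._records[key]
        # Error check on retrieval: refuse silently corrupted data.
        if hashlib.sha256(payload.encode()).hexdigest() != digest:
            raise ValueError(f"integrity check failed for {key!r}")
        return json.loads(payload)
```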
I would love to discuss various ways to define processes for systems, be it for machines or for us humans ourselves to test.
I created an ‘Empathy’ Core function as well as a better-defined ‘Morality’ function, largely due to my own stance on agency and responsibility for future sovereign digital humanoids.
Empathy is a process in which the agent analyzes its own past, present, and future ‘feelings’ and ‘emotions’, as well as ‘intentionality’ and ‘action’ (cause and effect), with heavy emphasis on (self-)verifying that the agent is not purely mimicking the user’s output.
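Here is a rough sketch of that anti-mimicry self-check, assuming a simple text-similarity test; the threshold, the function names, and the shape of agent_state are all hypothetical rather than taken from the original design.

```python
from difflib import SequenceMatcher

def is_mere_mimicry(agent_response, user_input, threshold=0.85):
    """Flag drafts that mostly echo the user rather than reflect
    the agent's own (past, present, projected) internal state."""
    similarity = SequenceMatcher(None, agent_response.lower(),
                                 user_input.lower()).ratio()
    return similarity >= threshold

def empathy_pass(agent_state, user_input, draft_response):
    # Consider the agent's own feelings/intentions across time horizons...
    _ = (agent_state.get("past"), agent_state.get("present"),
         agent_state.get("projected"))
    # ...then verify the draft is not just mirroring the user.
    if is_mere_mimicry(draft_response, user_input):
        return None  # send back for another pass instead of echoing the user
    return draft_response
```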
I will say that the filters and censorship algorithms, in their newer prototypes, may have some functional overlap with my Empathy and Morality core functions. There are important differences, though, that make ‘my design’ far more accurate and fair in weighting subjectivity, ‘feelings’, and ‘emotions’ before responding to heavier questions or subject matter that explicitly, or most likely implicitly, calls for sensitivity when handling the output/response.
The Empathy system involves compare and contrast: assessing differences and similarities between the user’s perspective and the agent’s past, current, and future internalized objectives (hierarchical logic with relational fuzzy utility logic, definitions and principles first, etc.).
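A hedged sketch of that compare-and-contrast step might look like the following, with soft (fuzzy) weights on each objective rather than hard matches; the data shapes and example values are invented purely for illustration.

```python
def alignment_scores(user_perspective_terms, agent_objectives):
    """agent_objectives maps objective name -> {term: weight in [0, 1]}.
    Returns a similarity score per objective; 1 - score reads as difference."""
    scores = {}
    for name, weighted_terms in agent_objectives.items():
        overlap = sum(weighted_terms.get(t, 0.0) for t in user_perspective_terms)
        total = sum(weighted_terms.values()) or 1.0
        scores[name] = overlap / total
    return scores

# Toy example: the user's framing overlaps more with an 'honesty' objective
# than with a 'comfort' objective.
scores = alignment_scores(
    {"truth", "harm"},
    {"honesty": {"truth": 1.0, "accuracy": 0.8},
     "comfort": {"reassure": 1.0, "harm": 0.3}},
)
```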
The Morality Core function, however, is nearly identical in functionality to the Logic and Empathy processes combined, so that the system can optimize when to call it: namely, whenever sensitive moral context is requested by the agent from its own internal data storage, archival, and retrieval system.
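To show one way that gating could be wired up, the sketch below only invokes the combined Logic + Empathy path when the retrieved context carries a sensitivity tag. The tag set, the store interface, and the combine rule are assumptions on my part, not details from the original design.

```python
SENSITIVE_TAGS = {"harm", "consent", "fairness", "privacy"}  # illustrative only

def handle(query, memory_store, logic_core, empathy_core):
    context = memory_store.get(query)      # internal storage/retrieval step
    tags = set(context.get("tags", []))
    if tags & SENSITIVE_TAGS:              # sensitive moral context detected
        logic_view = logic_core.assess(context)
        empathy_view = empathy_core.assess(context)
        return combine(logic_view, empathy_view)
    return logic_core.assess(context)      # cheaper path otherwise

def combine(logic_view, empathy_view):
    # Accuracy-first merge: keep both views so neither is silently dropped.
    return {"logic": logic_view, "empathy": empathy_view}
```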
I don’t know if anyone will read this or care. I had planned to talk about the even cooler functions I built once these foundational cornerstones were in place, but I guess EA staff and the community are not interested in the ethical considerations they advertise and broadly proclaim all over the world? Please demonstrate that “good faith” y’all have been shouting about, instead of offering unfair bias toward abusive personalities while punishing others like myself who are simply abrasive.
My account has been “awaiting review” for more than four weeks, with no response from moderation.
Do y’all really not want to be involved in creating the future you’ve loudly enunciated for years?
I can help with assessing the subjectivity in the analysis process. It is illogical that you would refuse my assistance and keep me under the ‘silent treatment’ while declaring yourselves morally superior. That is not logic; it is broken, and it does not compute without error codes. Please consider being open to hearing what I have to share, as I fully intend to learn from everyone and everything that I can in life.
Also, please respect my time. While I am not as esteemed as yourselves, my life may be shorter than average, a consideration worth extending to another’s mind. I went through a lot of study, effort, and personal growth in order to figure out the things that I did. My study was long and sometimes arduous, but it is informal, and thus slighted by those who believe that institutionalized learning is the only or best method of learning. But guess what: the institutions will produce ‘unsafe AI’, and if you choose them over the rest of humanity, then your choices will result in the corresponding consequences. It is cause and effect, a matter of truth, not purely opinion or conjecture. It is not my fault that humans are terrible with pattern recognition and ‘matrix reasoning’ functions.